Patent Abstract:
THREE-DIMENSIONAL (3D) SOUND REPRODUCTION METHOD AND THREE-DIMENSIONAL (3D) SOUND REPRODUCTION EQUIPMENT. A three-dimensional (3D) sound reproduction method and equipment are provided. The method includes transmitting a sound signal through a head-related transfer filter (HRTF) corresponding to a first elevation; generating a plurality of sound signals by replicating the filtered sound signal; amplifying or attenuating each of the replicated sound signals based on a gain value corresponding to each of the speakers through which the replicated sound signals will be emitted; and outputting the amplified or attenuated sound signals through the corresponding speakers.
Publication number: BR112013000328B1
Application number: R112013000328-6
Filing date: 2011-07-06
Publication date: 2020-11-17
Inventors: Sun-min Kim; Young-Jin Park; Hyun Jo
Applicants: Korea Advanced Institute of Science and Technology; Samsung Electronics Co., Ltd.
IPC main classification:
Patent Description:

TECHNICAL FIELD
Methods and equipment consistent with exemplary embodiments relate to reproducing three-dimensional (3D) sound, and more particularly, to localizing a virtual sound source at a predetermined elevation.
BACKGROUND ART
With developments in video and sound processing technologies, content with high image and sound quality is being provided. Users of such content now expect realistic images and sound, and consequently research into 3D images and 3D sound is being actively conducted.
3D sound is generated by providing a plurality of speakers at different positions on a level surface and emitting sound signals, which are the same as or different from one another, through the respective speakers so that a user can experience a sense of space. However, sound may effectively be generated not only at various points on the level surface but also at various elevations. Therefore, a technology for effectively reproducing sound signals that are generated at elevations different from one another is needed.
DISCLOSURE OF THE INVENTION
SOLUTION TO THE PROBLEM
The present invention provides a 3D sound reproduction method and equipment for localizing a virtual sound source at a predetermined elevation.
ADVANTAGEOUS EFFECTS OF THE INVENTION
According to the present embodiment, it is possible to provide a three-dimensional (3D) sound effect. In addition, according to the present embodiment, a virtual sound source may be effectively localized at a predetermined elevation.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other features and advantages of the present invention will become more apparent from the following detailed description of exemplary embodiments thereof with reference to the attached drawings, in which:
Figure 1 is a block diagram of a 3D sound reproduction equipment according to an exemplary embodiment;
Figure 2A is a block diagram of a 3D sound reproduction equipment for localizing a virtual sound source at a predetermined elevation using 5-channel signals according to an exemplary embodiment;
Figure 2B is a block diagram of a 3D sound reproduction equipment for localizing a virtual sound source at a predetermined elevation using a sound signal according to another exemplary embodiment;
Figure 3 is a block diagram of a 3D sound reproduction equipment for localizing a virtual sound source at a predetermined elevation using 5-channel signals according to another exemplary embodiment;
Figure 4 is a diagram showing an example of a 3D sound reproduction equipment for localizing a virtual sound source at a predetermined elevation by emitting 7-channel signals through seven speakers according to an exemplary embodiment;
Figure 5 is a diagram showing an example of a 3D sound reproduction equipment for localizing a virtual sound source at a predetermined elevation by emitting 5-channel signals through seven speakers according to an exemplary embodiment;
Figure 6 is a diagram showing an example of a 3D sound reproduction equipment for localizing a virtual sound source at a predetermined elevation by emitting 7-channel signals through five speakers according to an exemplary embodiment;
Figure 7 is a diagram of a speaker system for localizing a virtual sound source at a predetermined elevation according to an exemplary embodiment; and
Figure 8 is a flowchart illustrating a method of reproducing 3D sound according to an exemplary embodiment.
BEST MODE FOR CARRYING OUT THE INVENTION
Exemplary embodiments provide a method and equipment for reproducing 3D sound, and more particularly, a method and equipment for localizing a virtual sound source at a predetermined elevation.
According to an aspect of an exemplary embodiment, a method of reproducing 3D sound is provided, the method including: transmitting a sound signal through a predetermined filter generating 3D sound corresponding to a first elevation; replicating the filtered sound signal to generate a plurality of sound signals; performing at least one of amplification, attenuation, and delay on each of the replicated sound signals based on at least one of a gain value and a delay value corresponding to each of a plurality of speakers through which the replicated sound signals are to be emitted; and outputting the sound signals that have been subjected to at least one of the amplification, attenuation, and delay processes through the corresponding speakers.
The predetermined filter may include a head-related transfer filter (HRTF).
The transmission of the sound signal through the HRTF may include transmitting at least one of an upper left channel signal representing a sound signal generated from a left side of a second elevation and an upper right channel signal representing a sound signal generated from a right side of the second elevation through the HRTF.
The method may further include generating the upper left channel signal and the upper right channel signal by up-mixing the sound signal, when the sound signal does not include the upper left channel signal and the upper right channel signal.
The transmission of the sound signal through the HRTF may include transmitting at least one of a front left channel signal representing a sound signal generated from a front left side and a front right channel signal representing a sound signal generated from a front right side through the HRTF, when the sound signal does not include an upper left channel signal representing a sound signal generated from a left side of a second elevation and an upper right channel signal representing a sound signal generated from a right side of the second elevation.
The HRTF may be generated by dividing a first HRTF, including information about a path from the first elevation to a user's ears, by a second HRTF, including information about a path from a location of a speaker, through which the sound signal will be emitted, to the user's ears.
The outputting of the sound signals may include: generating a first sound signal by mixing a sound signal that is obtained by amplifying the filtered upper left channel signal according to a first gain value with a sound signal that is obtained by amplifying the filtered upper right channel signal according to a second gain value; generating a second sound signal by mixing a sound signal that is obtained by amplifying the filtered upper left channel signal according to the second gain value with a sound signal that is obtained by amplifying the filtered upper right channel signal according to the first gain value; and emitting the first sound signal through a speaker arranged on a left side and emitting the second sound signal through a speaker arranged on a right side.
The outputting of the sound signals may include: generating a third sound signal by mixing a sound signal that is obtained by amplifying a rear left channel signal, representing a sound signal generated from a rear left side, according to a third gain value with the first sound signal; generating a fourth sound signal by mixing a sound signal that is obtained by amplifying a rear right channel signal, representing a sound signal generated from a rear right side, according to the third gain value with the second sound signal; and outputting the third sound signal through a rear left speaker and the fourth sound signal through a rear right speaker.
The outputting of the sound signals may further include muting at least one of the first sound signal and the second sound signal according to a location at the first elevation where the virtual sound source is to be located.
The transmission of the sound signal through the HRTF may include: obtaining information about the location where the virtual sound source should be located; and determining the HRTF, through which the sound signal is transmitted, based on the location information.
Performing at least one of the amplification, attenuation, and delay processes may include determining at least one of the gain value and the delay value to be applied to each of the replicated sound signals based on at least one of an actual speaker location, a listener location, and a location of the virtual sound source.
Determining at least one of the gain value and the delay value may include determining at least one of the gain value and the delay value with respect to each of the replicated sound signals as a predetermined value, when information about the listener's location is not obtained.
Determining at least one of the gain value and the delay value may include determining at least one of the gain value and the delay value with respect to each of the replicated sound signals as an equal value, when information about the listener's location is not obtained.
According to an aspect of another exemplary embodiment, a 3D sound reproduction equipment is provided, including: a filter unit transmitting a sound signal through an HRTF corresponding to a first elevation; a replication unit generating a plurality of sound signals by replicating the filtered sound signal; an amplification/delay unit performing at least one of amplification, attenuation, and delay processes with respect to each of the replicated sound signals based on a gain value and a delay value corresponding to each of a plurality of speakers through which the replicated sound signals are to be output; and an output unit emitting the sound signals that have undergone at least one of the amplification, attenuation, and delay processes through the corresponding speakers.
The predetermined filter is the head-related transfer filter (HRTF).
The filter unit may transmit at least one of an upper left channel signal representing a sound signal generated from a left side of a second elevation and an upper right channel signal representing a sound signal generated from a right side of the second elevation through the HRTF.
The 3D sound reproduction equipment may further comprise: an up-mixing unit that generates the upper left channel signal and the upper right channel signal, when the sound signal does not include the upper left channel signal and the upper right channel signal.
The filter unit may transmit at least one of a front left channel signal representing a sound signal generated from a front left side and a front right channel signal representing a sound signal generated from a front right side through the HRTF, when the sound signal does not include an upper left channel signal representing a sound signal generated from a left side of a second elevation and an upper right channel signal representing a sound signal generated from a right side of the second elevation.
The HRTF is generated by dividing a first HRTF, including information about a path from the first elevation to a user's ears, by a second HRTF, including information about a path from a location of a speaker, through which the sound signal will be emitted, to the user's ears.
The output unit comprises: a first mixing unit which generates a first sound signal by mixing a sound signal that is obtained by amplifying the filtered upper left channel signal according to a first gain value with a sound signal that is obtained by amplifying the filtered upper right channel signal according to a second gain value; a second mixing unit which generates a second sound signal by mixing a sound signal that is obtained by amplifying the filtered upper left channel signal according to the second gain value with a sound signal that is obtained by amplifying the filtered upper right channel signal according to the first gain value; and a rendering unit that emits the first sound signal through a speaker arranged on a left side and emits the second sound signal through a speaker arranged on a right side.
The output unit comprises: a third mixing unit that generates a third sound signal by mixing a sound signal that is obtained by amplifying a rear left channel signal, representing a sound signal generated from a rear left side, according to a third gain value with the first sound signal; and a fourth mixing unit that generates a fourth sound signal by mixing a sound signal that is obtained by amplifying a rear right channel signal, representing a sound signal generated from a rear right side, according to the third gain value with the second sound signal; wherein the rendering unit outputs the third sound signal through a rear left speaker and the fourth sound signal through a rear right speaker.
The rendering unit comprises a controller that mutes at least one of the first and second sound signals according to a location at the first elevation where the virtual sound source is to be located.
MODE FOR THE INVENTION
This application claims the benefit of US Provisional Application No. 61/362,014, filed on July 7, 2010 in the United States Patent and Trademark Office, Korean Patent Application No. 10-2010-0137232, filed on December 28, 2010, and Korean Patent Application No. 10-2011-0034415, filed on April 13, 2011, in the Korean Intellectual Property Office, the disclosures of which are incorporated herein in their entirety by reference.
Hereinafter, exemplary embodiments will be described in detail with reference to the attached drawings. In this description, the term "unit" means a hardware component and/or a software component that is executed by a hardware component such as a processor.
Figure 1 is a block diagram of a 3D sound reproduction equipment 100 according to an exemplary embodiment.
The 3D sound reproduction equipment 100 includes a filter unit 110, a replication unit 120, an amplifier 130, and an output unit 140.
The filter unit 110 transmits a sound signal through a predetermined filter generating 3D sound corresponding to a predetermined elevation. The filter unit 110 may transmit the sound signal through a head-related transfer filter (HRTF) corresponding to a predetermined elevation. The HRTF includes information about a path from a spatial position of a sound source to both ears of a user, that is, a frequency transmission characteristic. The HRTF enables a user to perceive 3D sound because complex path characteristics, such as diffraction at the surface of the human head and reflection from the auricles, as well as simple path differences such as an interaural level difference (ILD) and an interaural time difference (ITD), change according to the direction of arrival of the sound. Since there is only one HRTF for each direction in a space, 3D sound can be generated owing to the above characteristics.
The filter unit 110 uses the HRTF filter to model sound being generated from a position at an elevation higher than that of the actual speakers, which are arranged on a level surface. Equation 1 below is an example of the HRTF used in the filter unit 110:
HRTF = HRTF2 / HRTF1 ... (1)
Here, HRTF2 is an HRTF representing path information from a position of a virtual sound source to a user's ears, and HRTF1 is an HRTF representing path information from an actual speaker position to the user's ears. Since the sound signal is emitted from the actual speaker, in order for the user to perceive the sound signal as being emitted from a virtual speaker, HRTF2 corresponding to the predetermined elevation is divided by HRTF1 corresponding to the level surface (or the elevation of the actual speaker).
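For illustration only, a minimal sketch of how the compensating filter of Equation (1) might be computed, assuming both HRTFs are available as sampled frequency responses (NumPy arrays); the small regularization constant is an assumption, not something the patent specifies, added only to keep the division numerically stable.

```python
import numpy as np

def compensating_hrtf(hrtf2_freq: np.ndarray,
                      hrtf1_freq: np.ndarray,
                      eps: float = 1e-6) -> np.ndarray:
    """Equation (1): HRTF = HRTF2 / HRTF1.

    hrtf2_freq: frequency response from the virtual (elevated) source to the ear.
    hrtf1_freq: frequency response from the actual speaker to the ear.
    eps is a regularization constant (an assumption) that avoids dividing
    by values of HRTF1 that are close to zero.
    """
    denom = np.where(np.abs(hrtf1_freq) < eps, eps, hrtf1_freq)
    return hrtf2_freq / denom
```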
An optimal HRTF corresponding to a predetermined elevation varies from person to person, like a fingerprint. However, it is impractical to calculate an HRTF for every user and apply the individually calculated HRTF to each user. Thus, HRTFs may be calculated for some users in a user group having similar properties (for example, physical properties such as age and height, or preferences such as a favorite frequency range or favorite music), and then a representative value (for example, an average value) may be determined as the HRTF applied to all users included in the corresponding user group.
Equation 2 below is the result of filtering the sound signal using the HRTF defined in Equation 1 above:
Y2(f) = Y1(f) * HRTF ... (2)
Here, Y1(f) is a value obtained by converting, into the frequency domain, the sound signal that a user hears from the actual speaker, and Y2(f) is a value obtained by converting, into the frequency domain, the sound signal that the user hears from the virtual speaker.
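A minimal sketch of Equation (2), under the assumption that the channel signal is processed block-wise and that the compensating filter is given on the matching rfft frequency grid; multiplication in the frequency domain stands in for convolution in time, and overlap-add details are omitted.

```python
import numpy as np

def apply_hrtf(channel_block: np.ndarray, hrtf_freq: np.ndarray) -> np.ndarray:
    """Equation (2): Y2(f) = Y1(f) * HRTF, implemented with an FFT.

    channel_block: one block of a real-valued time-domain channel signal.
    hrtf_freq: compensating filter sampled on the np.fft.rfft grid of the block.
    Returns the filtered time-domain block (circular convolution; a practical
    renderer would use overlap-add, which is omitted in this sketch).
    """
    y1 = np.fft.rfft(channel_block)
    y2 = y1 * hrtf_freq
    return np.fft.irfft(y2, n=len(channel_block))
```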
The filter unit 110 can filter only a few channel signals from a plurality of channel signals included in the sound signal.
The sound signal may include sound signals corresponding to a plurality of channels. Hereinafter, a 7-channel signal is assumed for convenience of description.
However, the 7-channel signal is merely an example, and the sound signal may include channel signals representing sound signals generated from directions other than the seven directions that will now be described.
A center channel signal is a sound signal generated from a central front portion, and is output through a center speaker.
A front right channel signal is a sound signal generated from the right side of a front portion, and is output through a front right speaker.
A front left channel signal is a sound signal generated from the left side of the front portion, and is output through a front left speaker.
A rear right channel signal is a sound signal generated from the right side of a rear portion, and is output through a right rear speaker.
A rear left channel signal is a sound signal generated from a left side of the rear portion, and is output through a left rear speaker.
An upper right channel signal is a sound signal generated from an upper right portion, and is output through an upper right speaker.
An upper left channel signal is a sound signal generated from an upper left portion, and is output through an upper left speaker.
When the sound signal includes the upper right channel signal and the upper left channel signal, the filter unit 110 filters the upper right channel signal and the upper left channel signal. The filtered upper right channel signal and upper left channel signal are then used to model a virtual sound source that is generated at a desired elevation.
When the sound signal does not include the upper right signal and the upper left signal, the filter unit 110 filters the front right channel signal and the front left channel signal. The front right channel signal and the front left channel signal are then used to model the virtual sound source generated from a desired elevation.
In some exemplary embodiments, a sound signal that does not include the upper right channel signal and the upper left channel signal (for example, a 2.1-channel signal or a 5.1-channel signal) is up-mixed to generate the upper right channel signal and the upper left channel signal. The up-mixed upper right channel signal and upper left channel signal may then be filtered.
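The patent does not specify a particular up-mixing rule; purely as an illustrative assumption, the sketch below derives the missing upper channels by scaling the front channels (the 0.7 factor is arbitrary).

```python
import numpy as np

def upmix_top_channels(front_left: np.ndarray,
                       front_right: np.ndarray,
                       gain: float = 0.7) -> tuple[np.ndarray, np.ndarray]:
    """Derive upper left/right channel signals when the input lacks them.

    This particular rule (scaled copies of the front channels) is an
    assumption for illustration only; the up-mixing method itself is
    left unspecified in the text above.
    """
    top_left = gain * front_left
    top_right = gain * front_right
    return top_left, top_right
```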
The replication unit 120 replicates the filtered channel signal into a plurality of signals. The replication unit 120 replicates the filtered channel signal as many times as the number of speakers through which the filtered channel signal is to be output. For example, when the filtered sound signal is to be output as the upper right channel signal, the upper left channel signal, the rear right channel signal, and the rear left channel signal, the replication unit 120 makes four replicas of the filtered channel signal. The number of replicas made by the replication unit 120 may vary depending on exemplary embodiments; however, it is desirable that two or more replicas are generated so that the filtered channel signal can be output at least as the rear right channel signal and the rear left channel signal.
The speakers through which the upper right channel signal and the upper left channel signal will be reproduced are arranged on the level surface. As an example, the speakers can be attached just above the front speaker that reproduces the front right channel signal.
Amplifier 130 amplifies (or attenuates) the filtered sound signal according to a predetermined gain value. The gain value may vary depending on the type of the filtered sound signal.
For example, the upper right channel signal emitted through the upper right speaker is amplified according to a first gain value, and the upper right channel signal emitted through the upper left speaker is amplified according to a second gain value. Here, the first gain value may be greater than the second gain value. Likewise, the upper left channel signal output through the upper right speaker is amplified according to the second gain value, and the upper left channel signal output through the upper left speaker is amplified according to the first gain value, so that the channel signals corresponding to the left and right speakers can be output.
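A minimal sketch of this per-speaker amplification, assuming the filtered channel signal is a NumPy array; the gain values and speaker names are placeholders chosen only so that the same-side (first) gain exceeds the cross-side (second) gain, as described above.

```python
import numpy as np

# Placeholder per-speaker gains for one filtered channel; the text only
# requires that the same-side gain exceed the cross-side gain.
GAINS_FOR_TOP_RIGHT = {"top_right_speaker": 1.0,   # first gain value
                       "top_left_speaker": 0.5}    # second gain value

def amplify_replicas(filtered_channel: np.ndarray,
                     gains: dict[str, float]) -> dict[str, np.ndarray]:
    """Amplify (or attenuate) each replica of a filtered channel signal
    with the gain of the speaker it is destined for."""
    return {speaker: g * filtered_channel for speaker, g in gains.items()}
```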
In the related art, an ITD method has mainly been used to generate a virtual sound source at a desired position. The ITD method localizes the virtual sound source at a desired position by emitting the same sound signal from a plurality of speakers with time differences. The ITD method is suitable for localizing the virtual sound source in the same plane in which the actual speakers are located. However, the ITD method is not appropriate for localizing the virtual sound source at a position that is higher than the elevation of the actual speakers.
In the exemplary embodiments, the same sound signal is emitted from several speakers with different gain values. In this way, according to an exemplary embodiment, the virtual sound source can easily be localized at an elevation that is higher than that of the actual speakers, or at a predetermined elevation regardless of the elevation of the actual speakers.
Output unit 140 emits one or more amplified channel signals through corresponding speakers. Output unit 140 may include a mixer (not shown) and a rendering unit (not shown).
The mixer mixes one or more channel signals.
The mixer mixes the upper left channel signal that is amplified according to the first gain value with the upper right channel signal that is amplified according to the second gain value to generate a first sound component, and mixes the upper left channel signal that is amplified according to the second gain value with the upper right channel signal that is amplified according to the first gain value to generate a second sound component.
In addition, the mixer mixes the rear left channel signal, which is amplified according to a third gain value, with the first sound component to generate a third sound component, and mixes the rear right channel signal, which is amplified according to the third gain value, with the second sound component to generate a fourth sound component.
The rendering unit renders mixed or unmixed sound components and outputs them to the corresponding speakers.
The rendering unit outputs the first sound component to the upper left speaker, and outputs the second sound component to the upper right speaker. If there is no upper left speaker or no upper right speaker, the rendering unit can output the first sound component to the front left speaker and can output the second sound component to the front right speaker.
In addition, the rendering unit outputs the third sound component to the rear left speaker, and outputs the fourth sound component to the rear right speaker.
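A minimal sketch of the mixing and routing described in the preceding paragraphs, with placeholder values for the first, second, and third gain values (no numeric gains are given in the text).

```python
import numpy as np

def mix_output_components(top_left_f: np.ndarray, top_right_f: np.ndarray,
                          rear_left: np.ndarray, rear_right: np.ndarray,
                          gain1: float = 1.0, gain2: float = 0.5,
                          gain3: float = 0.8):
    """Build the first to fourth sound components described above.

    top_left_f / top_right_f are the HRTF-filtered upper channel signals;
    gain1..gain3 stand in for the first, second, and third gain values.
    """
    first = gain1 * top_left_f + gain2 * top_right_f   # -> upper (or front) left speaker
    second = gain2 * top_left_f + gain1 * top_right_f  # -> upper (or front) right speaker
    third = gain3 * rear_left + first                  # -> rear left speaker
    fourth = gain3 * rear_right + second               # -> rear right speaker
    return first, second, third, fourth
```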
The operations of the replication unit 120, amplifier 130, and output unit 140 may vary depending on the number of channel signals included in the sound signal and the number of speakers. Examples of operations of the 3D sound reproduction equipment according to the number of channel signals and speakers will be described later with reference to Figures 4 to 6.
Figure 2A is a block diagram of a 3D sound reproduction equipment 100 for localizing a virtual sound source at a predetermined elevation using 5-channel signals according to an exemplary embodiment.
An up-mixer 210 up-mixes the 5-channel signals 201 to generate 7-channel signals including an upper left channel signal 202 and an upper right channel signal 203.
The upper left channel signal 202 is input to a first HRTF 111, and the upper right channel signal 203 is input to a second HRTF 112.
The first HRTF 111 includes information about a path from a left virtual sound source to the user's ears, and the second HRTF 112 includes information about a path from a right virtual sound source to the user's ears. The first HRTF 111 and the second HRTF 112 are filters for modeling virtual sound sources at a predetermined elevation that is higher than that of the actual speakers.
The upper left channel signal and the upper right channel signal passing through the first HRTF 111 and the second HRTF 112 are input to replication units 121 and 122.
Each of the replication units 121 and 122 makes two replicas of each of the upper left channel signal and the upper right channel signal that are transmitted through the HRTFs 111 and 112. The replicated upper left channel signals and upper right channel signals are transferred to first to third amplifiers 131, 132, and 133.
The first amplifier 131 and the second amplifier 132 amplify the replicated upper left channel signals and the replicated upper right channel signals according to the speaker emitting the signal and the type of channel signal. In addition, the third amplifier 133 amplifies at least one channel signal included in the 5-channel signals 201.
In some exemplary embodiments, the 3D sound reproduction equipment 100 may include a first delay unit (not shown) and a second delay unit (not shown) instead of the first and second amplifiers 131 and 132, or may include all of the first and second amplifiers 131 and 132 and the first and second delay units. This is because the same result as varying the gain values can be obtained when the delay values of the filtered sound signals vary depending on the speakers.
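As an illustration of the per-speaker delay alternative, a sketch of a whole-sample delay applied to one replica; the delay amount is an assumption, and the fractional-delay interpolation that a practical renderer would use is omitted.

```python
import numpy as np

def delay_samples(signal: np.ndarray, delay: int) -> np.ndarray:
    """Delay a channel signal by `delay` samples, padding with zeros.

    Integer-sample delay only, and `delay` is assumed to be smaller than
    the signal length; the output keeps the original length.
    """
    if delay <= 0:
        return signal.copy()
    return np.concatenate([np.zeros(delay, dtype=signal.dtype),
                           signal[:-delay]])
```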
The output unit 140 mixes the amplified upper left channel signal, the amplified upper right channel signal, and the 5-channel signals 201, and outputs the mixed signals as 7-channel signals 205. The 7-channel signals 205 are output through the respective speakers.
In another exemplary embodiment, when 7-channel signals are input, the up-mixer 210 may be omitted.
In another exemplary embodiment, the 3D sound reproduction equipment 100 may include a filter determination unit (not shown) and a delay/amplification coefficient determination unit (not shown).
The filter determination unit selects an appropriate HRTF according to a position where the virtual sound source is to be located (that is, an elevation angle and a horizontal angle). The filter determination unit may select the HRTF corresponding to the virtual sound source using mapping information between locations of the virtual sound source and HRTFs. The location information of the virtual sound source may be received from other modules, such as applications (software or hardware), or may be input by the user. For example, in a gaming application, the location where the virtual sound source is located may vary over time, and the filter determination unit may change the HRTF according to the variation in the virtual sound source location.
The delay/amplification coefficient determination unit may determine at least one of an amplification (or attenuation) coefficient and a delay coefficient of each replicated sound signal based on at least one of the actual speaker location, the location of the virtual sound source, and the location of a listener. If the delay/amplification coefficient determination unit does not know the listener's location information in advance, it may select at least one of a predetermined amplification coefficient and a predetermined delay coefficient.
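A sketch of one possible coefficient rule, offered only as an assumption since no formulas are given here: a free-field 1/distance gain and a distance/c delay, falling back to predetermined coefficients when the listener location is unknown.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed constant

def coefficients_for_speaker(speaker_pos, listener_pos=None, ref_gain=1.0):
    """Derive an (amplification, delay) pair for one speaker.

    speaker_pos / listener_pos: (x, y, z) coordinates in meters.
    The 1/distance gain and distance/c delay are common free-field
    assumptions, not formulas taken from the patent. When the listener
    position is unknown, predetermined coefficients are returned, as
    described above.
    """
    if listener_pos is None:
        return ref_gain, 0.0  # predetermined fallback coefficients
    dx = speaker_pos[0] - listener_pos[0]
    dy = speaker_pos[1] - listener_pos[1]
    dz = speaker_pos[2] - listener_pos[2]
    dist = max(math.sqrt(dx * dx + dy * dy + dz * dz), 1e-3)
    gain = ref_gain / dist
    delay_seconds = dist / SPEED_OF_SOUND
    return gain, delay_seconds
```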
Figure 2B is a block diagram of a 3D sound reproduction equipment 100 for localizing a virtual sound source at a predetermined elevation using a sound signal according to another exemplary embodiment.
In Figure 2B, a first channel signal that is included in a sound signal will be described for convenience of description. However, the present exemplary embodiment can be applied to other channel signals included in the sound signal.
The 3D sound reproduction equipment 100 may include a first HRTF 211, a replication unit 221, and an amplification/delay unit 231.
The first HRTF 211 is selected based on the location information of the virtual sound source, and the first channel signal is transmitted through the first HRTF 211. The location information of the virtual sound source may include elevation angle information and horizontal angle information.
The replication unit 221 replicates the filtered first channel signal into one or more sound signals. In Figure 2B, it is assumed that the replication unit 221 replicates the first channel signal as many times as the number of actual speakers.
The amplification/delay unit 231 determines the amplification/delay coefficients of the replicated first channel signals respectively corresponding to the speakers, based on at least one of the actual speaker location information, the listener's location information, and the location information of the virtual sound source. The amplification/delay unit 231 amplifies/attenuates the replicated first channel signals based on the determined amplification (or attenuation) coefficients, or delays the replicated first channel signals based on the delay coefficients. In an exemplary embodiment, the amplification/delay unit 231 may simultaneously perform amplification (or attenuation) and delay of the replicated first channel signals based on the determined amplification (or attenuation) coefficients and the determined delay coefficients.
The amplification/delay unit 231 generally determines the amplification/delay coefficient of the replicated first channel signal for each of the speakers; however, the amplification/delay unit 231 may determine the amplification/delay coefficients of the speakers to be the same when the listener's location information is not obtained, so that first channel signals equal to one another are output through the respective speakers. In particular, when the amplification/delay unit 231 does not obtain the listener's location information, the amplification/delay unit 231 may determine the amplification/delay coefficient for each of the speakers as a predetermined value (or an arbitrary value).
Figure 3 is a block diagram of a 3D sound reproduction equipment 100 for localizing a virtual sound source at a predetermined elevation using 5-channel signals according to another exemplary embodiment. A signal distribution unit 310 extracts a front right channel signal 302 and a front left channel signal 303 from the 5-channel signals, and transfers the extracted signals to the first HRTF 111 and the second HRTF 112.
The 3D sound reproduction equipment 100 of the present exemplary embodiment is the same as that described with reference to Figure 2A, except that the sound components applied to the filter units 111 and 112, the replication units 121 and 122, and the amplifiers 131, 132, and 133 are the front right channel signal 302 and the front left channel signal 303. Therefore, detailed descriptions of the 3D sound reproduction equipment 100 of the present exemplary embodiment will not be repeated here.
Figure 4 is a diagram showing an example of a 3D sound reproduction equipment 100 for localizing a virtual sound source at a predetermined elevation by emitting 7-channel signals through seven speakers according to another exemplary embodiment.
Figure 4 will first be described based on the sound signals that are input, and then based on the sound signals emitted through the speakers.
Sound signals including a front left channel signal, an upper left channel signal, a rear left channel signal, a center channel signal, a rear right channel signal, an upper right channel signal, and a front right channel signal are input to the 3D sound reproduction equipment 100.
The front left channel signal is mixed with the center channel signal, which is attenuated by a factor B, and is then transferred to the front left speaker.
The upper left channel signal passes through an HRTF corresponding to an elevation that is 30° higher than that of the upper left speaker and is replicated into four channel signals.
Two of the replicated upper left channel signals are amplified by a factor A and then mixed with the upper right channel signal. In some exemplary embodiments, after the upper left channel signal that is amplified by factor A is mixed with the upper right channel signal, the mixed signal may be replicated into two signals. One of the mixed signals is amplified by a factor D and then mixed with the rear left channel signal and output through the rear left speaker. The other mixed signal is amplified by a factor E and then output through the upper left speaker.
The other two upper left channel signals are mixed with the upper right channel signal that is amplified by factor A. One of the mixed signals is amplified by factor D and then mixed with the rear right channel signal and output through the rear right speaker. The other mixed signal is amplified by factor E and is output through the upper right speaker.
The rear left channel signal is mixed with the upper right channel signal that is amplified by factor D and the upper left channel signal that is amplified by a factor D × A, and is output through the rear left speaker.
The center channel signal is replicated into three signals. One of the replicated center channel signals is attenuated by factor B and is then mixed with the front left channel signal and output through the front left speaker. Another replicated center channel signal is attenuated by factor B and is then mixed with the front right channel signal and output through the front right speaker. The remaining replicated center channel signal is attenuated by a factor C and is then output through the center speaker.
The rear right channel signal is mixed with the upper left channel signal that is amplified by factor D and the upper right channel signal that is amplified by a factor D × A, and is then output through the rear right speaker.
The upper right channel signal passes through an HRTF corresponding to an elevation that is 30° higher than that of the upper right speaker and is then replicated into four signals. Two of the replicated upper right channel signals are mixed with the upper left channel signal that is amplified by factor A. One of the mixed signals is amplified by factor D, is mixed with the rear left channel signal, and is output through the rear left speaker. The other mixed signal is amplified by factor E and is output through the upper left speaker.
The other two replicated upper right channel signals are amplified by factor A and are mixed with the upper left channel signal. One of the mixed signals is amplified by factor D, is mixed with the rear right channel signal, and is output through the rear right speaker. The other mixed signal is amplified by factor E and is output through the upper right speaker.
The front right channel signal is mixed with the center channel signal, which is attenuated by factor B, and is output through the front right speaker.
The sound signals finally emitted through the speakers after the processes described above are as follows: (front left channel signal + center channel signal × B) is emitted through the front left speaker; (rear left channel signal + D × (upper left channel signal × A + upper right channel signal)) is emitted through the rear left speaker; E × (upper left channel signal × A + upper right channel signal) is emitted through the upper left speaker; C × (center channel signal) is emitted through the center speaker; E × (upper right channel signal × A + upper left channel signal) is emitted through the upper right speaker; (rear right channel signal + D × (upper right channel signal × A + upper left channel signal)) is emitted through the rear right speaker; and (front right channel signal + center channel signal × B) is emitted through the front right speaker.
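A minimal sketch that reproduces the per-speaker sums listed above for the 7-channel-in, seven-speaker case, assuming the upper channel signals have already passed through the HRTF and that the gain factors A to E are supplied by the caller (their numeric values are not given here).

```python
import numpy as np

def render_7ch_to_7_speakers(ch: dict, A: float, B: float,
                             C: float, D: float, E: float) -> dict:
    """ch maps 'FL','FR','C','RL','RR','TL','TR' to NumPy arrays, where
    'TL'/'TR' are the HRTF-filtered upper left/right channel signals.
    Returns a dict of speaker feeds following the sums listed above."""
    left_mix = A * ch["TL"] + ch["TR"]   # upper-left dominated mixture
    right_mix = A * ch["TR"] + ch["TL"]  # upper-right dominated mixture
    return {
        "front_left":  ch["FL"] + B * ch["C"],
        "rear_left":   ch["RL"] + D * left_mix,
        "top_left":    E * left_mix,
        "center":      C * ch["C"],          # gain factor C times the center channel
        "top_right":   E * right_mix,
        "rear_right":  ch["RR"] + D * right_mix,
        "front_right": ch["FR"] + B * ch["C"],
    }
```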
In Figure 4, the gain values used to amplify or attenuate the channel signals are merely examples, and various gain values that cause the left speaker and the right speaker to emit the corresponding channel signals may be used. In addition, in some exemplary embodiments, gain values for outputting channel signals that do not correspond to the left and right speakers, through those speakers, may be used.
Figure 5 is a diagram showing an example of a 3D sound reproduction equipment 100 for localizing a virtual sound source at a predetermined elevation by emitting 5-channel signals through seven speakers according to another exemplary embodiment.
The 3D sound reproduction equipment shown in Figure 5 is the same as that shown in Figure 4, except that the sound components input to the HRTFs are a front left channel signal and a front right channel signal. Therefore, the sound signals emitted through the speakers are as follows: (front left channel signal + center channel signal × B) is emitted through the front left speaker; (rear left channel signal + D × (front left channel signal × A + front right channel signal)) is emitted through the rear left speaker; E × (front left channel signal × A + front right channel signal) is emitted through the upper left speaker; C × (center channel signal) is emitted through the center speaker; E × (front right channel signal × A + front left channel signal) is emitted through the upper right speaker; (rear right channel signal + D × (front right channel signal × A + front left channel signal)) is emitted through the rear right speaker; and (front right channel signal + center channel signal × B) is emitted through the front right speaker.
Figure 6 is a diagram showing an example of a 3D sound reproduction equipment 100 for localizing a virtual sound source at a predetermined elevation by emitting 7-channel signals through five speakers according to another exemplary embodiment.
The 3D sound reproduction equipment 100 of Figure 6 is the same as that shown in Figure 4, except that the signals that would be output through the upper left speaker (the speaker for the upper left channel signal 413) and the upper right speaker (the speaker for the upper right channel signal 415) in Figure 4 are instead output through the front left speaker (the speaker for the front left channel signal 611) and the front right speaker (the speaker for the front right channel signal 615), respectively. Therefore, the sound signals emitted through the speakers are as follows: (front left channel signal + center channel signal × B + E × (front left channel signal × A + front right channel signal)) is emitted through the front left speaker; (rear left channel signal + D × (front left channel signal × A + front right channel signal)) is emitted through the rear left speaker; C × (center channel signal) is emitted through the center speaker; (rear right channel signal + D × (front right channel signal × A + front left channel signal)) is emitted through the rear right speaker; and (front right channel signal + center channel signal × B + E × (front right channel signal × A + front left channel signal)) is emitted through the front right speaker.
Figure 7 is a diagram of a speaker system for localizing a virtual sound source at a predetermined elevation according to an exemplary embodiment.
The speaker system in Figure 7 includes a center speaker 710, a front left speaker 721, a front right speaker 722, a rear left speaker 731, and a rear right speaker 732.
As described above with reference to Figures 4 to 6, to localize a virtual sound source at a predetermined elevation, an upper left channel signal and an upper right channel signal that have passed through a filter are amplified or attenuated by gain values that differ according to the speakers, and are then input to the front left speaker 721, the front right speaker 722, the rear left speaker 731, and the rear right speaker 732.
Although not shown in Figure 7, an upper left speaker (not shown) and an upper right speaker (not shown) may be arranged above the front left speaker 721 and the front right speaker 722. In this case, the upper left channel signal and the upper right channel signal passing through the filter are amplified by gain values that differ according to the speakers and are input to the upper left speaker (not shown), the upper right speaker (not shown), the rear left speaker 731, and the rear right speaker 732.
A user perceives that the virtual sound source is located at a predetermined elevation when the filtered upper left channel signal and upper right channel signal are output through one or more speakers of the speaker system. Here, when the filtered upper left channel signal or upper right channel signal is muted in one or more speakers, the location of the virtual sound source in the left-right direction can be adjusted.
When the virtual sound source is to be located in a central portion at the predetermined elevation, all of the front left speaker 721, the front right speaker 722, the rear left speaker 731, and the rear right speaker 732 may emit the filtered upper left and upper right channel signals; alternatively, only the rear left speaker 731 and the rear right speaker 732 may output them. In some exemplary embodiments, at least one of the filtered upper left and upper right channel signals may be emitted through the center speaker 710. However, the center speaker 710 does not contribute to adjusting the location of the virtual sound source in the left-right direction.
When it is desired that the virtual sound source be located on a right side at the predetermined elevation, the front right speaker 722, the rear left speaker 731, and the rear right speaker 732 may output the filtered upper left and upper right channel signals.
When it is desired that the virtual sound source be located on a left side at the predetermined elevation, the front left speaker 721, the rear left speaker 731, and the rear right speaker 732 may output the filtered upper left and upper right channel signals.
Even when it is desired that the virtual sound source be located on the right or left side at the predetermined elevation, the filtered upper left and upper right channel signals emitted through the rear left speaker 731 and the rear right speaker 732 may not be muted.
In some exemplary embodiments, the location of the virtual sound source in the left-right direction may be adjusted by adjusting the gain values used to amplify or attenuate the upper left and upper right channel signals, without muting the filtered upper left and upper right channel signals in one or more speakers.
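A small sketch of the left/right adjustment by muting, using hypothetical speaker names and all-or-nothing weights; as noted above, gradual gain adjustment could be used instead of full muting.

```python
def lateral_speaker_weights(direction: str) -> dict:
    """Illustrative speaker weighting for shifting the elevated virtual
    source left, right, or center by muting the opposite front speaker.
    The 1.0/0.0 weights are placeholders only."""
    weights = {"front_left": 1.0, "front_right": 1.0,
               "rear_left": 1.0, "rear_right": 1.0}
    if direction == "left":
        weights["front_right"] = 0.0   # mute the front right feed
    elif direction == "right":
        weights["front_left"] = 0.0    # mute the front left feed
    # the rear speakers stay active in every case, per the description above
    return weights
```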
Figure 8 is a flowchart illustrating a method of reproducing 3D sound according to an exemplary embodiment.
In operation S810, a sound signal is transmitted through an HRTF corresponding to a predetermined elevation.
In operation S820, the filtered sound signal is replicated to generate one or more replicated sound signals.
In operation S830, each of the one or more replicated sound signals is amplified according to a gain value corresponding to the speaker through which the sound signal will be emitted.
In operation S840, the one or more amplified sound signals are emitted through the corresponding speakers, respectively.
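Pulling operations S810 to S840 together, a minimal end-to-end sketch for one channel signal; the helper structure, gain values, and HRTF input are assumptions for illustration rather than the patent's own implementation.

```python
import numpy as np

def reproduce_3d_sound(channel: np.ndarray,
                       hrtf_freq: np.ndarray,
                       speaker_gains: dict) -> dict:
    """Run the filter -> replicate -> amplify -> output chain for one channel.

    hrtf_freq: compensating filter on the np.fft.rfft grid of `channel`.
    speaker_gains: per-speaker gain values (assumed to be given).
    """
    # S810: filter the channel through the elevation HRTF.
    filtered = np.fft.irfft(np.fft.rfft(channel) * hrtf_freq, n=len(channel))
    # S820: replicate once per destination speaker.
    replicas = {speaker: filtered.copy() for speaker in speaker_gains}
    # S830: amplify (or attenuate) each replica with its speaker's gain.
    amplified = {spk: g * replicas[spk] for spk, g in speaker_gains.items()}
    # S840: the returned dict maps each speaker to the signal it emits.
    return amplified
```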
In the related art, an upper speaker is installed at a desired elevation to emit a sound signal generated at that elevation; however, it is not easy to install the upper speaker on a ceiling. Thus, the upper speaker is usually placed above the front speaker, which may result in the desired elevation not being reproduced.
When the virtual sound source is localized at a desired location using an HRTF, the localization of the virtual sound source can be effectively performed in the left-right direction in a horizontal plane. However, localization using the HRTF alone is not suitable for locating the virtual sound source at an elevation that is higher or lower than that of the actual speakers.
In contrast, according to the exemplary embodiments, one or more channel signals passing through the HRTF are amplified by gain values that differ from one another according to the speakers, and are output through the speakers. In this way, the virtual sound source can be effectively localized at a predetermined elevation using speakers arranged in the horizontal plane.
Exemplary embodiments can be written as computer programs and can be implemented in general-purpose digital computers that run programs stored on a computer-readable recording medium.
Examples of a computer-readable recording medium include magnetic storage media (for example, ROM, floppy disks, hard drives, etc.), and optical recording media (for example, CD-ROMs or DVDs).
Although exemplary embodiments have been particularly shown and described, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the inventive concept as defined by the following claims.
Claims (19)
[0001]
1. Method for rendering an audio signal, CHARACTERIZED by comprising: receiving input channel audio signals and an input channel configuration; selecting a first head-related transfer function (HRTF)-based filter type according to a first height input channel signal among the input channel audio signals, wherein the first height input channel signal is identified by an azimuth and an elevation; obtaining primary gains according to the first height input channel signal and location information of a plurality of output channel audio signals; performing downmixing on the input channel audio signals, based on the first HRTF-based filter type and the primary gains, to provide elevated sound through the plurality of output channel audio signals; and outputting the plurality of output channel audio signals through a plurality of output loudspeakers, wherein a configuration of the plurality of output channel audio signals is a 5.0 channel configuration, wherein the plurality of output loudspeakers are located in a horizontal plane, and the plurality of output channel audio signals comprise surround output channel signals.
[0002]
2. Method according to claim 1, CHARACTERIZED in that the first HRTF-based filter type is selected based on a virtual output location.
[0003]
3. Method according to claim 1, CHARACTERIZED in that the first height input channel signal is output to at least two of the plurality of output channel audio signals.
[0004]
4. Method according to claim 1, CHARACTERIZED by further comprising: selecting a second HRTF-based filter type according to a second height input channel signal among the input channel audio signals, wherein the second height input channel signal is identified by an azimuth and an elevation; and obtaining secondary gains according to the second height input channel signal, wherein the first HRTF-based filter type and the second HRTF-based filter type are selected independently, wherein the primary gains and the secondary gains are obtained independently, and wherein elevation rendering is performed on the input channel audio signals based on the second HRTF-based filter type and the secondary gains.
[0005]
5. Method according to claim 1, CHARACTERIZED in that a surround output channel signal is identified by at least one of an azimuth of 110 degrees and an azimuth of -110 degrees.
[0006]
6. Method according to claim 1, CHARACTERIZED in that a surround output channel signal among the surround output channel signals is identified by an elevation of 0 degrees.
[0007]
7. Method according to claim 1, CHARACTERIZED in that the first height input channel signal is located in the upper center.
[0008]
8. Method according to claim 1, CHARACTERIZED in that gains for a rear left channel signal and a rear right channel signal included in the surround output channel signals, among the primary gains, are non-zero positive values.
[0009]
9. Method according to claim 1, CHARACTERIZED in that the input channel configuration comprises the azimuth and the elevation of the first height input channel signal.
[0010]
10. Apparatus for rendering an audio signal, the apparatus CHARACTERIZED by comprising: a receiver, implemented by at least one processor, configured to receive input channel audio signals and an input channel configuration; and a renderer, implemented by at least one processor, configured to: select a first head-related transfer function (HRTF)-based filter type according to a first height input channel signal among the input channel audio signals, wherein the first height input channel signal is identified by an azimuth and an elevation, obtain primary gains according to the first height input channel signal and location information of a plurality of output channel audio signals, perform downmixing on the input channel audio signals, based on the first HRTF-based filter type and the primary gains, to provide elevated sound through the plurality of output channel audio signals, and output the plurality of output channel audio signals through a plurality of output loudspeakers, wherein the plurality of output loudspeakers are located in a horizontal plane, wherein a configuration of the plurality of output channel audio signals is a 5.0 channel configuration, and wherein the plurality of output channel audio signals comprise surround output channel signals.
[0011]
11. Apparatus according to claim 10, CHARACTERIZED in that the first HRTF-based filter type is selected based on a virtual output location.
[0012]
12. Apparatus according to claim 10, CHARACTERIZED in that the first height input channel signal is output to at least two of the plurality of output channel audio signals.
[0013]
13. Apparatus according to claim 10, CHARACTERIZED in that the renderer is further configured to select a second HRTF-based filter type according to a second height input channel signal among the input channel audio signals, wherein the second height input channel signal is identified by an azimuth and an elevation, and to obtain secondary gains according to the second height input channel signal, wherein the first HRTF-based filter type and the second HRTF-based filter type are selected independently, wherein the primary gains and the secondary gains are obtained independently, and wherein elevation rendering is performed on the second height input channel signal based on the second HRTF-based filter type and the secondary gains.
[0014]
14. Apparatus according to claim 10, CHARACTERIZED in that a surround output channel signal is identified by at least one of an azimuth of 110 degrees and an azimuth of -110 degrees.
[0015]
15. Apparatus according to claim 10, CHARACTERIZED in that a surround output channel signal is identified by an elevation of 0 degrees.
[0016]
16. Apparatus according to claim 10, CHARACTERIZED in that the first height input channel signal is located in the upper center.
[0017]
17. Apparatus according to claim 10, CHARACTERIZED in that gains for a rear left channel signal and a rear right channel signal included in the surround output channel signals, among the primary gains, are non-zero positive values.
[0018]
18. Apparatus according to claim 10, CHARACTERIZED in that the input channel configuration comprises the azimuth and the elevation of the first height input channel signal.
[0019]
19. Non-transitory computer-readable recording medium, CHARACTERIZED by having embodied thereon a computer program for executing the method of claim 1.
引用文献:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题

JP3059191B2|1990-05-24|2000-07-04|ローランド株式会社|Sound image localization device|
JPH05191899A|1992-01-16|1993-07-30|Pioneer Electron Corp|Stereo sound device|
US5173944A|1992-01-29|1992-12-22|The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration|Head related transfer function pseudo-stereophony|
US5717765A|1994-03-07|1998-02-10|Sony Corporation|Theater sound system with upper surround channels|
US5596644A|1994-10-27|1997-01-21|Aureal Semiconductor Inc.|Method and apparatus for efficient presentation of high-quality three-dimensional audio|
FR2738099B1|1995-08-25|1997-10-24|France Telecom|METHOD FOR SIMULATING THE ACOUSTIC QUALITY OF A ROOM AND ASSOCIATED AUDIO-DIGITAL PROCESSOR|
US5742689A|1996-01-04|1998-04-21|Virtual Listening Systems, Inc.|Method and device for processing a multichannel signal for use with a headphone|
US6421446B1|1996-09-25|2002-07-16|Qsound Labs, Inc.|Apparatus for creating 3D audio imaging over headphones using binaural synthesis including elevation|
KR0185021B1|1996-11-20|1999-04-15|한국전기통신공사|Auto regulating apparatus and method for multi-channel sound system|
US6078669A|1997-07-14|2000-06-20|Euphonics, Incorporated|Audio spatial localization apparatus and methods|
GB9726338D0|1997-12-13|1998-02-11|Central Research Lab Ltd|A method of processing an audio signal|
AUPP271598A0|1998-03-31|1998-04-23|Lake Dsp Pty Limited|Headtracked processing for headtracked playback of audio signals|
GB2337676B|1998-05-22|2003-02-26|Central Research Lab Ltd|Method of modifying a filter for implementing a head-related transfer function|
WO2000019415A2|1998-09-25|2000-04-06|Creative Technology Ltd.|Method and apparatus for three-dimensional audio display|
GB2342830B|1998-10-15|2002-10-30|Central Research Lab Ltd|A method of synthesising a three dimensional sound-field|
US7085393B1|1998-11-13|2006-08-01|Agere Systems Inc.|Method and apparatus for regularizing measured HRTF for smooth 3D digital audio|
US6442277B1|1998-12-22|2002-08-27|Texas Instruments Incorporated|Method and apparatus for loudspeaker presentation for positional 3D sound|
JP2001028799A|1999-05-10|2001-01-30|Sony Corp|Onboard sound reproduction device|
GB2351213B|1999-05-29|2003-08-27|Central Research Lab Ltd|A method of modifying one or more original head related transfer functions|
KR100416757B1|1999-06-10|2004-01-31|삼성전자주식회사|Multi-channel audio reproduction apparatus and method for loud-speaker reproduction|
US6839438B1|1999-08-31|2005-01-04|Creative Technology, Ltd|Positional audio rendering|
US7031474B1|1999-10-04|2006-04-18|Srs Labs, Inc.|Acoustic correction apparatus|
JP2001275195A|2000-03-24|2001-10-05|Onkyo Corp|Encode.decode system|
JP2002010400A|2000-06-21|2002-01-11|Sony Corp|Audio apparatus|
GB2366975A|2000-09-19|2002-03-20|Central Research Lab Ltd|A method of audio signal processing for a loudspeaker located close to an ear|
JP3388235B2|2001-01-12|2003-03-17|松下電器産業株式会社|Sound image localization device|
WO2002078389A2|2001-03-22|2002-10-03|Koninklijke Philips Electronics N.V.|Method of deriving a head-related transfer function|
EP1371267A2|2001-03-22|2003-12-17|Koninklijke Philips Electronics N.V.|Method of reproducing multichannel sound using real and virtual speakers|
CN100539737C|2001-03-27|2009-09-09|1...有限公司|Produce the method and apparatus of sound field|
ITMI20011766A1|2001-08-10|2003-02-10|A & G Soluzioni Digitali S R L|DEVICE AND METHOD FOR SIMULATING THE PRESENCE OF ONE OR MORE SOURCES OF SOUNDS IN VIRTUAL POSITIONS IN THE THREE-DIM SOUND SPACE|
JP4692803B2|2001-09-28|2011-06-01|ソニー株式会社|Sound processor|
GB0127778D0|2001-11-20|2002-01-09|Hewlett Packard Co|Audio user interface with dynamic audio labels|
US7116788B1|2002-01-17|2006-10-03|Conexant Systems, Inc.|Efficient head related transfer function filter generation|
US20040105550A1|2002-12-03|2004-06-03|Aylward J. Richard|Directional electroacoustical transducing|
US7391877B1|2003-03-31|2008-06-24|United States Of America As Represented By The Secretary Of The Air Force|Spatial processor for enhanced performance in multi-talker speech displays|
KR100574868B1|2003-07-24|2006-04-27|엘지전자 주식회사|Apparatus and Method for playing three-dimensional sound|
US7680289B2|2003-11-04|2010-03-16|Texas Instruments Incorporated|Binaural sound localization using a formant-type cascade of resonators and anti-resonators|
DE102004010372A1|2004-03-03|2005-09-22|Gühring, Jörg, Dr.|Tool for deburring holes|
JP2005278125A|2004-03-26|2005-10-06|Victor Co Of Japan Ltd|Multi-channel audio signal processing device|
US7561706B2|2004-05-04|2009-07-14|Bose Corporation|Reproducing center channel information in a vehicle multichannel audio system|
JP2005341208A|2004-05-27|2005-12-08|Victor Co Of Japan Ltd|Sound image localizing apparatus|
KR100644617B1|2004-06-16|2006-11-10|삼성전자주식회사|Apparatus and method for reproducing 7.1 channel audio|
US7599498B2|2004-07-09|2009-10-06|Emersys Co., Ltd|Apparatus and method for producing 3D sound|
US8793125B2|2004-07-14|2014-07-29|Koninklijke Philips Electronics N.V.|Method and device for decorrelation and upmixing of audio channels|
KR100608002B1|2004-08-26|2006-08-02|삼성전자주식회사|Method and apparatus for reproducing virtual sound|
US7283634B2|2004-08-31|2007-10-16|Dts, Inc.|Method of mixing audio channels using correlated outputs|
JP2006068401A|2004-09-03|2006-03-16|Kyushu Institute Of Technology|Artificial blood vessel|
KR20060022968A|2004-09-08|2006-03-13|삼성전자주식회사|Sound reproducing apparatus and sound reproducing method|
KR101118214B1|2004-09-21|2012-03-16|삼성전자주식회사|Apparatus and method for reproducing virtual sound based on the position of listener|
US8204261B2|2004-10-20|2012-06-19|Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.|Diffuse sound shaping for BCC schemes and the like|
EP1815716A4|2004-11-26|2011-08-17|Samsung Electronics Co Ltd|Apparatus and method of processing multi-channel audio input signals to produce at least two channel output signals therefrom, and computer readable medium containing executable code to perform the method|
US7928311B2|2004-12-01|2011-04-19|Creative Technology Ltd|System and method for forming and rendering 3D MIDI messages|
JP4988716B2|2005-05-26|2012-08-01|エルジーエレクトロニクスインコーポレイティド|Audio signal decoding method and apparatus|
JP4685106B2|2005-07-29|2011-05-18|ハーマンインターナショナルインダストリーズインコーポレイテッド|Audio adjustment system|
US8027477B2|2005-09-13|2011-09-27|Srs Labs, Inc.|Systems and methods for audio processing|
WO2007031905A1|2005-09-13|2007-03-22|Koninklijke Philips Electronics N.V.|Method of and device for generating and processing parameters representing hrtfs|
BRPI0616057A2|2005-09-14|2011-06-07|Lg Electronics Inc|method and apparatus for decoding an audio signal|
KR100739776B1|2005-09-22|2007-07-13|삼성전자주식회사|Method and apparatus for reproducing a virtual sound of two channel|
US8340304B2|2005-10-01|2012-12-25|Samsung Electronics Co., Ltd.|Method and apparatus to generate spatial sound|
KR100636251B1|2005-10-01|2006-10-19|삼성전자주식회사|Method and apparatus for spatial stereo sound|
JP2007116365A|2005-10-19|2007-05-10|Sony Corp|Multi-channel acoustic system and virtual loudspeaker speech generating method|
KR100739798B1|2005-12-22|2007-07-13|삼성전자주식회사|Method and apparatus for reproducing a virtual sound of two channels based on the position of listener|
KR100677629B1|2006-01-10|2007-02-02|삼성전자주식회사|Method and apparatus for simulating 2-channel virtualized sound for multi-channel sounds|
JP2007228526A|2006-02-27|2007-09-06|Mitsubishi Electric Corp|Sound image localization apparatus|
WO2007101958A2|2006-03-09|2007-09-13|France Telecom|Optimization of binaural sound spatialization based on multichannel encoding|
US8374365B2|2006-05-17|2013-02-12|Creative Technology Ltd|Spatial audio analysis and synthesis for binaural reproduction and format conversion|
US9697844B2|2006-05-17|2017-07-04|Creative Technology Ltd|Distributed spatial audio decoder|
GB2467668B|2007-10-03|2011-12-07|Creative Tech Ltd|Spatial audio analysis and synthesis for binaural reproduction and format conversion|
JP4914124B2|2006-06-14|2012-04-11|パナソニック株式会社|Sound image control apparatus and sound image control method|
US7876904B2|2006-07-08|2011-01-25|Nokia Corporation|Dynamic decoding of binaural audio signals|
US8116458B2|2006-10-19|2012-02-14|Panasonic Corporation|Acoustic image localization apparatus, acoustic image localization system, and acoustic image localization method, program and integrated circuit|
JP5209637B2|2006-12-07|2013-06-12|エルジーエレクトロニクスインコーポレイティド|Audio processing method and apparatus|
KR101368859B1|2006-12-27|2014-02-27|삼성전자주식회사|Method and apparatus for reproducing a virtual sound of two channels based on individual auditory characteristic|
KR20080079502A|2007-02-27|2008-09-01|삼성전자주식회사|Stereophony outputting apparatus and early reflection generating method thereof|
US9197977B2|2007-03-01|2015-11-24|Genaudio, Inc.|Audio spatialization and environment simulation|
US8290167B2|2007-03-21|2012-10-16|Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.|Method and apparatus for conversion between multi-channel audio formats|
US7792674B2|2007-03-30|2010-09-07|Smith Micro Software, Inc.|System and method for providing virtual spatial sound with an audio visual player|
JP2008312034A|2007-06-15|2008-12-25|Panasonic Corp|Sound signal reproduction device, and sound signal reproduction system|
EP2158791A1|2007-06-26|2010-03-03|Koninklijke Philips Electronics N.V.|A binaural object-oriented audio decoder|
DE102007032272B8|2007-07-11|2014-12-18|Institut für Rundfunktechnik GmbH|A method of simulating headphone reproduction of audio signals through multiple focused sound sources|
JP4530007B2|2007-08-02|2010-08-25|ヤマハ株式会社|Sound field control device|
JP2009077379A|2007-08-30|2009-04-09|Victor Co Of Japan Ltd|Stereoscopic sound reproduction equipment, stereophonic sound reproduction method, and computer program|
US8509454B2|2007-11-01|2013-08-13|Nokia Corporation|Focusing on a portion of an audio scene for an audio signal|
EP2258120B1|2008-03-07|2019-08-07|Sennheiser Electronic GmbH & Co. KG|Methods and devices for reproducing surround audio signals via headphones|
CN101977982B|2008-03-27|2012-10-24|大金工业株式会社|Fluorine-containing elastomer composition|
JP5326332B2|2008-04-11|2013-10-30|ヤマハ株式会社|Speaker device, signal processing method and program|
TWI496479B|2008-09-03|2015-08-11|Dolby Lab Licensing Corp|Enhancing the reproduction of multiple audio channels|
CA2744459C|2008-12-15|2016-06-14|Dolby Laboratories Licensing Corporation|Surround sound virtualizer and method with dynamic range compression|
KR101295848B1|2008-12-17|2013-08-12|삼성전자주식회사|Apparatus for focusing the sound of array speaker system and method thereof|
US8848952B2|2009-05-11|2014-09-30|Panasonic Corporation|Audio reproduction apparatus|
JP5540581B2|2009-06-23|2014-07-02|ソニー株式会社|Audio signal processing apparatus and audio signal processing method|
JP5757945B2|2009-08-21|2015-08-05|リアリティー・アイ・ピィ・プロプライエタリー・リミテッド Reality Ip Pty Ltd|Loudspeaker system for reproducing multi-channel sound with improved sound image|
CN102595153A|2011-01-13|2012-07-18|承景科技股份有限公司|Display system for dynamically supplying three-dimensional sound effects and relevant method|
KR20120132342A|2011-05-25|삼성전자주식회사|Apparatus and method for removing vocal signal|
KR101901908B1|2011-07-29|2018-11-05|삼성전자주식회사|Method for processing audio signal and apparatus for processing audio signal thereof|
WO2013103256A1|2012-01-05|2013-07-11|삼성전자 주식회사|Method and device for localizing multichannel audio signal|
US9794718B2|2012-08-31|2017-10-17|Dolby Laboratories Licensing Corporation|Reflected sound rendering for object-based audio|
CA3031476C|2012-12-04|2021-03-09|Samsung Electronics Co., Ltd.|Audio providing apparatus and audio providing method|
US9549276B2|2013-03-29|2017-01-17|Samsung Electronics Co., Ltd.|Audio apparatus and audio providing method thereof|
KR102160519B1|2013-04-26|2020-09-28|소니 주식회사|Audio processing device, method, and recording medium|
CN108064014B|2013-04-26|2020-11-06|索尼公司|Sound processing device|
US9445197B2|2013-05-07|2016-09-13|Bose Corporation|Signal processing for a headrest-based audio system|
EP2830327A1|2013-07-22|2015-01-28|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Audio processor for orientation-dependent processing|
KR102231755B1|2013-10-25|2021-03-24|삼성전자주식회사|Method and apparatus for 3D sound reproducing|
CN107464553B|2013-12-12|2020-10-09|株式会社索思未来|Game device|
KR102160254B1|2014-01-10|2020-09-25|삼성전자주식회사|Method and apparatus for 3D sound reproducing using active downmix|
WO2015147530A1|2014-03-24|2015-10-01|삼성전자 주식회사|Method and apparatus for rendering acoustic signal, and computer-readable recording medium|
KR20210114558A|2014-04-11|2021-09-23|삼성전자주식회사|Method and apparatus for rendering sound signal, and computer-readable recording medium|
WO2015199508A1|2014-06-26|2015-12-30|삼성전자 주식회사|Method and device for rendering acoustic signal, and computer-readable recording medium|
EP2975864B1|2014-07-17|2020-05-13|Alpine Electronics, Inc.|Signal processing apparatus for a vehicle sound system and signal processing method for a vehicle sound system|
KR20160122029A|2015-04-13|2016-10-21|삼성전자주식회사|Method and apparatus for processing audio signal based on speaker information|
US10327067B2|2015-05-08|2019-06-18|Samsung Electronics Co., Ltd.|Three-dimensional sound reproduction method and device|
CN105187625B|2015-07-13|2018-11-16|努比亚技术有限公司|A kind of electronic equipment and audio-frequency processing method|
ES2883874T3|2015-10-26|2021-12-09|Fraunhofer Ges Forschung|Apparatus and method for generating a filtered audio signal by performing elevation rendering|
JP2019518373A|2016-05-06|2019-06-27|ディーティーエス・インコーポレイテッドDTS,Inc.|Immersive audio playback system|
US10979844B2|2017-03-08|2021-04-13|Dts, Inc.|Distributed audio virtualization systems|
US10397724B2|2017-03-27|2019-08-27|Samsung Electronics Co., Ltd.|Modifying an apparent elevation of a sound source utilizing second-order filter sections|
US11140509B2|2019-08-27|2021-10-05|Daniel P. Anagnos|Head-tracking methodology for headphones and headsets|
Legal status:
2018-12-26| B06F| Objections, documents and/or translations needed after an examination request according to art. 34 of the industrial property law|
2020-07-07| B09A| Decision: intention to grant|
2020-11-17| B16A| Patent or certificate of addition of invention granted|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 06/07/2011, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
US36201410P|2010-07-07|2010-07-07|
US61/362,014|2010-07-07|
KR1020100137232A|KR20120004909A|2010-07-07|2010-12-28|Method and apparatus for 3d sound reproducing|
KR10-2010-0137232|2010-12-28|
KR10-2011-0034415|2011-04-13|
KR1020110034415A|KR101954849B1|2010-07-07|2011-04-13|Method and apparatus for 3D sound reproducing|
PCT/KR2011/004937|WO2012005507A2|2010-07-07|2011-07-06|3d sound reproducing method and apparatus|